17 research outputs found

    On the Properties of Simulation-based Estimators in High Dimensions

    Considering the increasing size of available data, the need for statistical methods that control the finite sample bias is growing. This is mainly due to the frequent settings where the number of variables is large and allowed to increase with the sample size, causing standard inferential procedures to incur a significant loss of performance. Moreover, the complexity of statistical models is also increasing, thereby entailing important computational challenges in constructing new estimators or in implementing classical ones. A trade-off between numerical complexity and statistical properties is often accepted. However, numerically efficient estimators that are at once unbiased, consistent and asymptotically normal in high dimensional problems would generally be ideal. In this paper, we set out a general framework from which such estimators can easily be derived for wide classes of models. This framework is based on the concepts that underlie simulation-based estimation methods such as indirect inference. The approach allows various extensions compared to previous results, as it accommodates possibly inconsistent estimators and is applicable to discrete models and/or models with a large number of parameters. We consider an algorithm, namely the Iterative Bootstrap (IB), to efficiently compute simulation-based estimators, and we establish its convergence properties. Within this framework we also prove the properties of simulation-based estimators, more specifically their unbiasedness, consistency and asymptotic normality when the number of parameters is allowed to increase with the sample size. An important implication of the proposed approach is therefore that it yields unbiased estimators in finite samples. Finally, we study this approach when applied to three common models, namely logistic regression, negative binomial regression and lasso regression.
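    As a concrete illustration of the iterative bootstrap idea described above, the following is a minimal sketch (not the paper's implementation) of an IB-style bias correction for logistic regression: the initial MLE is repeatedly recomputed on data simulated at the current parameter value, and the discrepancy drives the update. The sample size, the number of simulated replicates H, the number of iterations K and the Newton solver are illustrative assumptions.

```python
# Minimal sketch of an Iterative Bootstrap (IB) style bias correction for
# logistic regression. The sample size, H, K and the Newton solver are
# illustrative assumptions, not values or code from the paper.
import numpy as np

rng = np.random.default_rng(0)

def logistic_mle(X, y, n_iter=30):
    """Plain Newton-Raphson MLE for logistic regression (small ridge for stability)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = np.clip(X @ beta, -30, 30)
        p = 1.0 / (1.0 + np.exp(-eta))
        grad = X.T @ (y - p)
        hess = X.T @ (X * (p * (1.0 - p))[:, None])
        beta = beta + np.linalg.solve(hess + 1e-6 * np.eye(X.shape[1]), grad)
    return beta

def simulate_logistic(X, beta, rng):
    """Simulate binary responses from the logistic model at parameter beta."""
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ beta, -30, 30)))
    return rng.binomial(1, p)

def iterative_bootstrap(X, y, H=50, K=20, rng=rng):
    """IB update: theta <- theta + (pi_hat - average of pi_hat on data simulated at theta)."""
    pi_hat = logistic_mle(X, y)          # initial estimator (biased in small samples)
    theta = pi_hat.copy()
    for _ in range(K):
        sims = np.stack([logistic_mle(X, simulate_logistic(X, theta, rng)) for _ in range(H)])
        theta = theta + (pi_hat - sims.mean(axis=0))
    return theta

# Toy usage: small n relative to the number of covariates, where MLE bias is noticeable.
n, p = 80, 6
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.linspace(-1.5, 1.5, p)
y = simulate_logistic(X, beta_true, rng)
print("MLE:", logistic_mle(X, y))
print("IB :", iterative_bootstrap(X, y))
```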

    A simple recipe for making accurate parametric inference in finite sample

    Constructing tests or confidence regions that control error rates in the long run is probably one of the most important problems in statistics. Yet, the theoretical justification for most methods in statistics is asymptotic. The bootstrap, for example, despite its simplicity and its widespread usage, is an asymptotic method. In general, no claim is made about the exactness of inferential procedures in finite samples. In this paper, we propose an alternative to the parametric bootstrap. We set up general conditions to demonstrate theoretically that accurate inference can be claimed in finite samples.
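    To illustrate the kind of finite-sample guarantee discussed above, here is a hedged sketch of a standard Monte Carlo (parametric-bootstrap-style) test for a fully specified null hypothesis, whose p-value is valid in finite samples. It is a generic construction shown only for illustration, not the specific procedure proposed in the paper, and the toy model and statistic are assumptions.

```python
# Minimal sketch of a Monte Carlo (parametric-bootstrap-style) test with a
# finite-sample-valid p-value for a fully specified null hypothesis. This is a
# standard construction used to illustrate the flavour of simulation-based
# finite-sample inference; it is not the paper's method.
import numpy as np

rng = np.random.default_rng(1)

def mc_pvalue(y, simulate_null, statistic, B=999, rng=rng):
    """p = (1 + #{T_b* >= T_obs}) / (B + 1); valid in finite samples when the null is simple."""
    t_obs = statistic(y)
    t_sim = np.array([statistic(simulate_null(rng)) for _ in range(B)])
    return (1 + np.sum(t_sim >= t_obs)) / (B + 1)

# Toy example: H0: data are N(0, 1); test statistic = |sample mean|.
n = 30
y = rng.normal(loc=0.4, scale=1.0, size=n)             # data generated away from H0
p = mc_pvalue(
    y,
    simulate_null=lambda r: r.normal(size=n),           # draws under the null model
    statistic=lambda x: abs(np.mean(x)),
)
print("Monte Carlo p-value:", p)
```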

    A Flexible Bias Correction Method based on Inconsistent Estimators

    An important challenge in statistical analysis lies in controlling the estimation bias when handling the ever-increasing data size and model complexity. For example, approximate methods are increasingly used to address the analytical and/or computational challenges when implementing standard estimators, but they often lead to inconsistent estimators. Consistent estimators can thus be difficult to obtain, especially for complex models and/or in settings where the number of parameters diverges with the sample size. We propose a general simulation-based estimation framework that allows the construction of consistent and bias corrected estimators for parameters of increasing dimensions. The key advantage of the proposed framework is that it only requires computing a simple inconsistent estimator multiple times. The resulting Just Identified iNdirect Inference estimator (JINI) enjoys desirable properties, including consistency, asymptotic normality, and better finite sample bias correction than alternative methods. We further provide a simple algorithm to construct the JINI in a computationally efficient manner. The JINI is therefore especially useful in settings where standard methods may be challenging to apply, for example, in the presence of misclassification and rounding. We consider comprehensive simulation studies and analyze an alcohol consumption data example to illustrate the excellent performance and usefulness of the method.
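    The sketch below illustrates the JINI idea in a deliberately simple misclassification setting: a Bernoulli probability is observed through known false-positive and false-negative rates, the naive sample mean of the observed labels is inconsistent, and matching it against the same estimator computed on data simulated from the full model recovers the parameter. The misclassification rates, the iterative matching scheme and all numerical settings are illustrative assumptions rather than the paper's data example.

```python
# Hedged sketch of a JINI-style correction in a simple setting: a Bernoulli
# probability observed through a known misclassification mechanism. The naive
# estimator (mean of the observed labels) ignores misclassification and is
# therefore inconsistent; matching it against its value on data simulated from
# the full model recovers the parameter. Rates and solver settings are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
ALPHA, BETA = 0.10, 0.05   # assumed false-positive and false-negative rates

def simulate_observed(theta, n, rng):
    """Latent Bernoulli(theta) responses pushed through the misclassification model."""
    z = rng.binomial(1, theta, size=n)
    flip_up = rng.binomial(1, ALPHA, size=n)    # a 0 recorded as 1
    flip_down = rng.binomial(1, BETA, size=n)   # a 1 recorded as 0
    return np.where(z == 1, 1 - flip_down, flip_up)

def naive_estimator(y):
    return y.mean()                              # inconsistent: targets E[Y], not theta

def jini(y, H=200, K=30, rng=rng):
    """Iteratively match the naive estimator with its average simulated counterpart."""
    pi_hat = naive_estimator(y)
    theta = pi_hat
    n = len(y)
    for _ in range(K):
        sim = np.mean([naive_estimator(simulate_observed(theta, n, rng)) for _ in range(H)])
        theta = np.clip(theta + (pi_hat - sim), 0.0, 1.0)
    return theta

n, theta_true = 500, 0.30
y = simulate_observed(theta_true, n, rng)
print("naive:", naive_estimator(y))   # biased towards theta*(1-BETA) + (1-theta)*ALPHA
print("JINI :", jini(y))              # approximately recovers theta_true
```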

    Accounting for Vibration Noise in Stochastic Measurement Errors

    The measurement of data over time and/or space is of utmost importance in a wide range of domains, from engineering to physics. Devices that perform these measurements therefore need to be extremely precise to obtain correct system diagnostics and accurate predictions, consequently requiring a rigorous calibration procedure which models their errors before they are employed. While the deterministic components of these errors do not represent a major modelling challenge, most of the research over the past years has focused on delivering methods that can explain and estimate the complex stochastic components of these errors. This effort has greatly improved the precision and uncertainty quantification of measurement devices, but has thus far not accounted for a significant stochastic noise that arises for many of these devices: vibration noise. Indeed, having filtered out physical explanations for this noise, a residual stochastic component often carries over which can drastically affect measurement precision. This component can originate from different sources, including the internal mechanics of the measurement devices as well as the movement of these devices when placed on moving objects or vehicles. To remove this disturbance from signals, this work puts forward a modelling framework for this specific type of noise and adapts the Generalized Method of Wavelet Moments to estimate these models. We deliver the asymptotic properties of this method when applied to processes that include vibration noise and show the considerable practical advantages of this approach in simulation and applied case studies.
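    As a rough illustration of wavelet-variance moment matching in the spirit of the Generalized Method of Wavelet Moments, the sketch below fits a "white noise + random walk" error model by matching the empirical Haar-type wavelet variance of the signal to the average wavelet variance of signals simulated from the model. Using simulated rather than closed-form model-implied wavelet variances, as well as the chosen model, scales and optimiser settings, are simplifying assumptions for illustration only.

```python
# Sketch of wavelet-variance moment matching (GMWM spirit): fit a white noise
# plus random walk error model by matching empirical and simulated wavelet
# variances. The simulation-based stand-in for closed-form wavelet variances
# and all numerical settings are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def haar_wavelet_variance(x, n_scales=8):
    """Empirical Haar-type wavelet variance at dyadic scales tau = 2, 4, ..., 2**n_scales."""
    wv = []
    for j in range(1, n_scales + 1):
        tau = 2 ** j
        n_blocks = len(x) // tau
        if n_blocks < 2:
            break
        means = x[: n_blocks * tau].reshape(n_blocks, tau).mean(axis=1)
        d = (means[1:] - means[:-1]) / np.sqrt(2.0)   # Haar-type detail coefficients
        wv.append(np.mean(d ** 2))
    return np.array(wv)

def simulate_model(params, n, rng):
    """White noise plus random walk, a common stochastic error model for sensors."""
    sigma_wn, sigma_rw = np.exp(params)               # log-parameterised for positivity
    return rng.normal(0.0, sigma_wn, n) + np.cumsum(rng.normal(0.0, sigma_rw, n))

def gmwm_style_fit(x, H=30, seed=12345):
    nu_hat = haar_wavelet_variance(x)
    def objective(params):
        r = np.random.default_rng(seed)               # common random numbers: deterministic objective
        sims = np.stack([haar_wavelet_variance(simulate_model(params, len(x), r))
                         for _ in range(H)])
        nu_model = sims.mean(axis=0)[: len(nu_hat)]
        return np.sum((np.log(nu_hat) - np.log(nu_model)) ** 2)   # log scale stabilises the fit
    res = minimize(objective, x0=np.log([0.1, 0.01]), method="Nelder-Mead")
    return np.exp(res.x)

# Toy usage: simulate a signal with known noise magnitudes, then recover them.
x = simulate_model(np.log([0.5, 0.02]), n=2 ** 13, rng=rng)
print("fitted (sigma_wn, sigma_rw):", gmwm_style_fit(x))
```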

    Wavelet-Based Moment-Matching Techniques for Inertial Sensor Calibration

    The task of inertial sensor calibration has required the development of various techniques to take into account the sources of measurement error coming from such devices. The calibration of the stochastic errors of these sensors has been the focus of an increasing amount of research, in which the method of reference has been the so-called "Allan variance slope method" which, in addition to not having appropriate statistical properties, requires subjective input that makes it prone to mistakes. To overcome this, recent research has started proposing "automatic" approaches where the parameters of the probabilistic models underlying the error signals are estimated by matching functions of the Allan variance or wavelet variance with their model-implied counterparts. However, despite the increased use of such techniques, there has been no study or clear direction for practitioners on which approach is optimal for the purpose of sensor calibration. This paper formally defines the class of estimators based on this technique and puts forward theoretical and applied results that, comparing estimators within this class, suggest the use of the Generalized Method of Wavelet Moments as an optimal choice.
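    For reference, the sketch below shows the non-overlapping Allan variance computation that underlies the "Allan variance slope method" mentioned above: noise types are read off from the log-log slope of the Allan variance against the averaging time (roughly -1 for white noise and +1 for a random walk, in the variance convention). The toy signal and averaging times are illustrative assumptions.

```python
# Sketch of the non-overlapping Allan variance computation behind the
# "Allan variance slope method". The toy gyroscope-like signal and the
# averaging windows are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)

def allan_variance(y, m):
    """Non-overlapping Allan variance of rate data y at an averaging window of m samples."""
    n_blocks = len(y) // m
    means = y[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

# Toy gyroscope-like signal: white noise plus a small random walk.
n = 2 ** 15
y = rng.normal(0.0, 0.3, n) + np.cumsum(rng.normal(0.0, 0.001, n))

taus = 2 ** np.arange(1, 12)
avar = np.array([allan_variance(y, m) for m in taus])
slopes = np.diff(np.log(avar)) / np.diff(np.log(taus))
for tau, a in zip(taus, avar):
    print(f"tau = {tau:5d}  AVAR = {a:.3e}")
print("local log-log slopes:", np.round(slopes, 2))  # near -1 at short taus, rising toward +1
```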

    Quantum invariants of 3-manifolds from a quantum group related to U_q(sl_3)

    In this thesis, a family of quantum invariants of 3-manifolds is constructed by means of 6j-symbols coming from a quantum group related to U_q(sl_3). Although these 6j-symbols do not come from the quantum group U_q(sl_2), they are similar to those of the Kashaev-Baseilhac-Benedetti theory in that they are built using the quantum dilogarithm. Indeed, they also depend on a single variable, allowing an interpretation in terms of the tetrahedral parameters of hyperbolic ideal tetrahedra.